Approximation of Optimization Problems and Learnability
Authors
Abstract
We study optimization problems in a context derived from the study of computational learnability. While an optimization problem can be hard to solve, it may be easy to approximate. But when the degree of approximation is checked by another approximating algorithm, we can set up an interaction protocol in which the task is to find the power of the checker, a task similar to that of learning a secret concept. The results concern relations between approximability and this new kind of learnability.
Similar references
Learning From An Optimization Viewpoint
Optimization has always played a central role in machine learning, and advances in the field of optimization and mathematical programming have greatly influenced machine learning models. However, the connection between optimization and learning is much deeper: one can phrase statistical and online learning problems directly as corresponding optimization problems. In this dissertation I take this...
Non-Lipschitz Semi-Infinite Optimization Problems Involving Local Cone Approximation
In this paper we study the nonsmooth semi-infinite programming problem with inequality constraints. First, we consider the notions of local cone approximation $\Lambda$ and $\Lambda$-subdifferential. Then, we derive the Karush-Kuhn-Tucker optimality conditions under the Abadie and the Guignard constraint qualifications.
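For orientation, the Karush-Kuhn-Tucker conditions mentioned above can be sketched in their standard smooth finite-dimensional form; the paper's nonsmooth semi-infinite version replaces gradients with $\Lambda$-subdifferentials and allows an infinite index set, which is not reproduced here.

```latex
% Problem: minimize f(x) subject to g_t(x) <= 0 for all t in T.
% KKT conditions at a candidate minimizer x^*, with active index set
% T(x^*) = { t in T : g_t(x^*) = 0 }:
\begin{aligned}
  \nabla f(x^*) + \sum_{t \in T(x^*)} \lambda_t \nabla g_t(x^*) &= 0,\\
  \lambda_t &\ge 0, \quad t \in T(x^*),\\
  \lambda_t\, g_t(x^*) &= 0, \quad t \in T(x^*).
\end{aligned}
```

In the semi-infinite setting the index set $T$ may be infinite, but only finitely many multipliers $\lambda_t$ are nonzero; constraint qualifications such as Abadie's or Guignard's guarantee that such multipliers exist.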
Learning Ordered Binary Decision Diagrams
This note studies the learnability of ordered binary decision diagrams (OBDDs). We give a polynomial-time algorithm using membership and equivalence queries that finds the minimum OBDD for the target respecting a given ordering. We also prove that both types of queries and the restriction to a given ordering are necessary if we want minimality in the output, unless P=NP. If learning has to occur ...
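To make the object of study concrete, here is a minimal sketch of an OBDD and its evaluation, assuming a simple hypothetical encoding in which internal nodes are tuples `(var, low, high)` and leaves are booleans; this illustrates only what an OBDD is, not the query-based learning algorithm of the paper.

```python
# Minimal OBDD sketch (hypothetical encoding): an internal node is a
# tuple (var, low, high); a leaf is a boolean constant.
def evaluate(node, assignment):
    """Walk from the root, following the low edge when the tested
    variable is False and the high edge when it is True."""
    while not isinstance(node, bool):
        var, low, high = node
        node = high if assignment[var] else low
    return node

# OBDD for the function x1 AND x2 under the variable ordering x1 < x2:
n2 = ("x2", False, True)   # tests x2; reached only when x1 is True
root = ("x1", False, n2)   # tests x1 first, respecting the ordering

evaluate(root, {"x1": True, "x2": True})   # → True
```

The fixed variable ordering is what the "O" in OBDD refers to: every root-to-leaf path tests variables in the same order, which is also the restriction the learning algorithm above is given.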
Links Between Learning and Optimization: a Brief Tutorial
This report is a brief exposition of some of the important links between machine learning and combinatorial optimization. We explain how efficient ‘learnability’ in standard probabilistic models of learning is linked to the existence of efficient randomized algorithms for certain natural combinatorial optimization problems, and we discuss the complexity of some of these optimization problems.
A Free Line Search Steepest Descent Method for Solving Unconstrained Optimization Problems
In this paper, we solve unconstrained optimization problems using a free line search steepest descent method. First, we propose a double-parameter scaled quasi-Newton formula for calculating an approximation of the Hessian matrix. The approximation obtained from this formula is a positive definite matrix that satisfies the standard secant relation. We also show that the largest eigenvalue...
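For context, the general steepest descent scheme the paper builds on can be sketched as follows; this is a minimal illustration with a fixed step size on a simple quadratic, and it does not reproduce the paper's double-parameter scaled quasi-Newton step or its free line search rule.

```python
import numpy as np

# Minimal steepest descent sketch: repeatedly step against the gradient
# until it is small. The fixed step size is an assumption for this toy
# example; the paper's method chooses the step differently.
def steepest_descent(grad, x0, step=0.1, tol=1e-8, max_iter=1000):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - step * g
    return x

# Minimize f(x) = ||x||^2, whose gradient is 2x; the minimizer is the origin.
x_min = steepest_descent(lambda x: 2 * x, [1.0, -2.0])
```

On this convex quadratic the iterates contract geometrically toward the origin, which is why quasi-Newton scalings of the step (as in the paper) are mainly of interest for less well-conditioned problems.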